
    Better Sparsifiers for Directed Eulerian Graphs

    Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for the fast computation of fundamental problems on random walks, such as computing the stationary distribution, hitting and commute times, and personalized PageRank vectors. While spectral sparsification is well understood for undirected graphs and it is known that for every graph $G$, $(1+\varepsilon)$-sparsifiers with $O(n\varepsilon^{-2})$ edges exist [Batson-Spielman-Srivastava, STOC '09] (which is optimal), the best known constructions of Eulerian sparsifiers require $\Omega(n\varepsilon^{-2}\log^4 n)$ edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers, specifically:
    1. We show that for every directed Eulerian graph $\vec{G}$, there exists an Eulerian sparsifier with $O(n\varepsilon^{-2}\log^2 n \log^2\log n + n\varepsilon^{-4/3}\log^{8/3} n)$ edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19] with recent progress on the matrix Spencer conjecture [Bansal-Jiang-Meka, STOC '23].
    2. We give an improved analysis of the constructions based on short-cycle decompositions, yielding an $m^{1+\delta}$-time algorithm, for any constant $\delta > 0$, for constructing Eulerian sparsifiers with $O(n\varepsilon^{-2}\log^3 n)$ edges.
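
    For context, the approximation guarantees above are typically formalized as follows; this is a sketch following the standard definitions from the directed-Laplacian literature (e.g., Cohen et al., STOC '17), not text from the abstract itself:

        % Undirected case: H is a (1+eps)-spectral sparsifier of G when its
        % Laplacian L_H satisfies the two-sided Loewner-order bound
        \[ (1-\varepsilon)\, L_G \preceq L_H \preceq (1+\varepsilon)\, L_G. \]
        % Eulerian directed case: writing L_{\vec{G}} for the directed Laplacian
        % and L_G = (L_{\vec{G}} + L_{\vec{G}}^{\top})/2 for its undirected
        % symmetrization, \vec{H} is an eps-Eulerian sparsifier of \vec{G} when
        \[ \bigl\| L_G^{+/2} \bigl( L_{\vec{G}} - L_{\vec{H}} \bigr) L_G^{+/2} \bigr\|_2 \le \varepsilon, \]
        % where L_G^{+/2} denotes the square root of the pseudoinverse of L_G.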

    Training Private Models That Know What They Don't Know

    Training reliable deep learning models that avoid making overconfident but incorrect predictions is a longstanding challenge. This challenge is further exacerbated when learning has to be differentially private: the protection provided to sensitive data comes at the price of injecting additional randomness into the learning process. In this work, we conduct a thorough empirical investigation of selective classifiers -- which can abstain when they are unsure -- under a differential privacy constraint. We find that several popular selective prediction approaches are ineffective in a differentially private setting because they increase the risk of privacy leakage. At the same time, we identify that a recent approach that only uses checkpoints produced by an off-the-shelf private learning algorithm stands out as particularly suitable under DP. Further, we show that differential privacy does not just harm utility but also degrades selective classification performance. To analyze this effect across privacy levels, we propose a novel evaluation mechanism that isolates selective prediction performance across model utility levels. Our experimental results show that recovering the performance level attainable by non-private models is possible, but comes at a considerable coverage cost as the privacy budget decreases.
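
    To make the selective-prediction setup concrete, here is a minimal sketch of the common confidence-thresholding (softmax-response) baseline together with the coverage and selective-accuracy metrics the abstract refers to. All names and the synthetic data are illustrative assumptions; the checkpoint-based DP approach the abstract highlights is not reproduced here.

        # Minimal selective-classification sketch: predict the argmax class when
        # its confidence clears a threshold, otherwise abstain. Illustrative only.
        import numpy as np

        def selective_predict(probs: np.ndarray, threshold: float) -> np.ndarray:
            """probs: (n, k) class probabilities. Returns predictions in
            {0..k-1}, or -1 where the model abstains (low confidence)."""
            preds = probs.argmax(axis=1)
            preds[probs.max(axis=1) < threshold] = -1  # abstain
            return preds

        def coverage_and_selective_accuracy(preds, labels):
            """Coverage = fraction of examples answered; selective accuracy
            is computed only on the answered subset."""
            answered = preds != -1
            coverage = answered.mean()
            acc = (preds[answered] == labels[answered]).mean() if answered.any() else float("nan")
            return coverage, acc

        # Usage: sweep the threshold to trace the coverage/accuracy trade-off
        # on synthetic data (a real evaluation would use model outputs).
        rng = np.random.default_rng(0)
        logits = rng.normal(size=(1000, 10))
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        labels = rng.integers(0, 10, size=1000)
        for t in (0.2, 0.4, 0.6):
            cov, acc = coverage_and_selective_accuracy(selective_predict(probs, t), labels)
            print(f"threshold={t:.1f} coverage={cov:.2f} selective_acc={acc:.2f}")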